When good AI ethics is just good AI engineering
Most issues in AI ethics are side effects of subpar engineering. This doesn't mean they aren't ethical issues: if your bridge collapses and kills people because you tried to save money by using bad materials, that is an ethical problem. But it's one that can be prevented by using the right materials -- by getting the engineering right -- without having to invent an "ethical bridge" that doesn't fall, or set up a Bridge Ethics Committee inside your construction company. The bridge didn't have an ethics problem; the people managing the project did.

There is no complete taxonomy of ethical issues in AI, and as more of our world gets built with, run by, or simply labeled AI, building one will only get harder. What follows are a couple of examples of issues commonly framed as AI ethics that are, in fact, the ethically questionable side effects of improper AI engineering.
AI's ethics problem: Abstractions everywhere but where are the rules?
Machines that make decisions about us: what could possibly go wrong? Essays, speeches, and seminars pose that question year after year as artificial intelligence research makes stunning advances. Baked-in algorithmic bias is only one of many resulting issues. Jonathan Shaw, managing editor of Harvard Magazine, wrote earlier this year: "Artificial intelligence can aggregate and assess vast quantities of data that are sometimes beyond human capacity to analyze unaided, thereby enabling AI to make hiring recommendations, determine in seconds the creditworthiness of loan applicants, and predict the chances that criminals will re-offend." Again, what could possibly go wrong?